
  • Rohit Krishnan's article, "What Comes After?" discusses California Governor Gavin Newsom's recent veto of SB 1047, a bill aimed at regulating artificial intelligence (AI) models. The bill had garnered significant support in the California Assembly but faced opposition that questioned its effectiveness and the evidence backing its proposed regulations. Newsom's veto message highlights concerns about the bill's focus on large-scale AI models, suggesting that it could create a false sense of security while overlooking the risks posed by smaller, specialized models. He emphasizes the need for a regulatory framework that is adaptable and considers the context in which AI systems are deployed, particularly in high-risk environments. The governor acknowledges the importance of protecting the public from potential harms of AI technology but argues that the approach taken by SB 1047 was not the most effective one. In his statement, Newsom calls for collaboration with leading experts in generative AI to develop evidence-based regulations, an initiative intended to build a more informed understanding of the capabilities and risks of frontier AI models. His commitment to working with experts such as Dr. Fei-Fei Li signals a shift toward a more empirical approach to AI regulation. Krishnan reflects on the broader implications of the veto, suggesting that the debate over AI regulation is likely to continue and evolve. He points out that while SB 1047 aimed to address existential risks and large-scale threats, the lack of concrete evidence for such risks complicates the regulatory landscape. The article critiques passing minimally restrictive regulations without a clear understanding of their benefits, arguing that regulations should be grounded in evidence and focused on promoting human flourishing. 
The author proposes several principles for future AI regulation, emphasizing the importance of understanding the technology, solving real user problems, and keeping restrictions minimal to foster innovation. He advocates for a regulatory approach that prioritizes AI's potential benefits while guarding against unnecessary bureaucratic hurdles. Krishnan concludes by acknowledging the challenges policymakers face in navigating the rapidly evolving AI landscape, stressing the need for a balanced approach that allows innovation while addressing legitimate concerns about safety and ethics. The article serves as a call for thoughtful, evidence-based regulation that can adapt to the complexities of AI technology and its impact on society.